Adversarial Bone Length Attack on Action Recognition
Authors
Abstract
Skeleton-based action recognition models have recently been shown to be vulnerable to adversarial attacks. Compared to adversarial attacks on images, perturbations to skeletons are typically bounded to a lower dimension of approximately 100 per frame. This lower-dimensional setting makes it more difficult to generate imperceptible perturbations. Existing attacks resolve this by exploiting the temporal structure of the skeleton motion so that the perturbation dimension increases to thousands. In this paper, we show that adversarial attacks can be performed on skeleton-based action recognition models, even in a significantly low-dimensional setting without any temporal manipulation. Specifically, we restrict the perturbations to the lengths of the skeleton's bones, which allows an adversary to manipulate only about 30 effective dimensions. We conducted experiments on the NTU RGB+D and HDM05 datasets and demonstrate that the proposed attack successfully deceived models, sometimes with a success rate greater than 90%, using small perturbations. Furthermore, we discovered an interesting phenomenon: in our low-dimensional setting, adversarial training with the bone length attack shares a similar property with data augmentation, and it not only improves robustness but also improves classification accuracy on the original data. This is a counterexample to the trade-off between robustness and clean accuracy, which has been widely observed in studies on the high-dimensional regime.
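The bone-length parameterization described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the parent-array skeleton encoding, and the per-bone multiplicative scaling `delta` are all assumptions made here for clarity. The key idea it demonstrates is that perturbing only one scalar per bone, rather than every joint coordinate in every frame, restricts the adversary to roughly as many effective dimensions as the skeleton has bones.

```python
import numpy as np

def perturb_bone_lengths(joints, parents, delta):
    """Rescale each bone of a skeleton by (1 + delta) and rebuild the joints.

    joints  : (J, 3) array of 3D joint positions, root listed first.
    parents : length-J list of parent indices; parents[0] == -1 for the root.
              Assumes topological order (parents[j] < j for all j > 0).
    delta   : (J,) per-bone length perturbation; delta[0] is unused.
    """
    new = np.empty_like(joints)
    new[0] = joints[0]  # the root joint is kept fixed
    for j in range(1, len(parents)):
        bone = joints[j] - joints[parents[j]]  # original bone vector
        # Scale the bone's length while keeping its direction, then attach
        # it to the (already reconstructed) parent joint.
        new[j] = new[parents[j]] + (1.0 + delta[j]) * bone
    return new
```

Note that changing one bone near the root translates the entire subtree below it, which is why even ~30 scalar parameters can move the skeleton enough to flip a classifier's decision while the pose stays visually plausible.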
Similar Resources
Adversarial Attacks on Image Recognition
The purpose of this project is to extend the work done by Papernot et al. in [4] on adversarial attacks in image recognition. We investigated whether a reduction in feature dimensionality can maintain a comparable level of misclassification success while increasing computational efficiency. We formed an attack on a black-box model with an unknown training set by forcing the oracle to misclassif...
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
We introduce two tactics, namely the strategicallytimed attack and the enchanting attack, to attack reinforcement learning agents trained by deep reinforcement learning algorithms using adversarial examples. In the strategically-timed attack, the adversary aims at minimizing the agent’s reward by only attacking the agent at a small subset of time steps in an episode. Limiting the attack activit...
ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
With the excellent accuracy and feasibility, the Neural Networks (NNs) have been widely applied into the novel intelligent applications and systems. However, with the appearance of the Adversarial Attack, the NN based system performance becomes extremely vulnerable: the image classification results can be arbitrarily misled by the adversarial examples, which are crafted images with human unperc...
Adversarial Label Flips Attack on Support Vector Machines
To develop a robust classification algorithm in the adversarial setting, it is important to understand the adversary’s strategy. We address the problem of label flips attack where an adversary contaminates the training set through flipping labels. By analyzing the objective of the adversary, we formulate an optimization framework for finding the label flips that maximize the classification erro...
Learning to Attack: Adversarial Transformation Networks
With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direc...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i2.20132